
    Random Access Protocols with Collision Resolution in a Noncoherent Setting

    Wireless systems are increasingly used for Machine-Type Communication (MTC), where users sporadically send very short messages. In such a setting, the overhead imposed by channel estimation is substantial, which motivates noncoherent communication. In this paper, we consider a noncoherent setup in which users randomly access the medium to send short messages to a common receiver. We propose a transmission scheme based on Gabor frames, where each user has a dedicated codebook of M possible codewords; the codebook simultaneously serves as an ID for the user. The scheme is used as the basis for a simple protocol for collision resolution.
    Comment: 5 pages, 3 figures; EDIT: A version of this work has been submitted for publication in the IEEE Wireless Communication Letters Journal
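    The paper's exact construction is not reproduced here, but the codebook idea can be illustrated with a standard Gabor frame: applying all N² time-frequency shifts to a seed vector in C^N yields N² codewords, from which each user would be assigned a disjoint subset of M. The function names and the user-to-codeword assignment below are illustrative, not the authors' scheme.

```python
import cmath
import random

def gabor_codebook(g):
    """All N^2 time-frequency shifts of seed vector g (a Gabor frame for C^N)."""
    N = len(g)
    codebook = []
    for k in range(N):            # cyclic time shift
        for l in range(N):        # frequency (modulation) shift
            word = [cmath.exp(2j * cmath.pi * l * n / N) * g[(n - k) % N]
                    for n in range(N)]
            codebook.append(word)
    return codebook

# A random complex seed of length N = 7 yields 49 codewords; each user would
# hold M of them, and the subset itself identifies the user to the receiver.
random.seed(0)
N = 7
seed = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
frame = gabor_codebook(seed)
```

    The (k, l) = (0, 0) shift reproduces the seed itself; the remaining shifts populate the frame.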

    A Pre-log Region for the Non-coherent MIMO Two-Way Relaying Channel

    We study the two-user MIMO block-fading two-way relay channel in the non-coherent setting, where neither the terminals nor the relay have knowledge of the channel realizations. We analyze the achievable sum-rate when the users employ independent, isotropically distributed, unitary input signals with an amplify-and-forward (AF) strategy at the relay node. As a byproduct, we present an achievable pre-log region of the AF scheme, defined as the limiting ratio of the rate region to the logarithm of the signal-to-noise ratio (SNR) as the SNR tends to infinity. We compare the performance with time-division multiple access (TDMA) schemes, both coherent and non-coherent. The analysis is supported by a geometric interpretation based on the paradigm of subspace-based communication.
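    The pre-log definition given in the abstract can be written out explicitly; the symbol names below are illustrative, not necessarily the paper's notation.

```latex
% Pre-log of user i: the limiting ratio of its rate to log SNR.
\eta_i \;=\; \lim_{\mathrm{SNR}\to\infty} \frac{R_i(\mathrm{SNR})}{\log \mathrm{SNR}},
\qquad i \in \{1, 2\}.
% The pre-log region is then the set of pairs (\eta_1, \eta_2)
% achievable by the AF scheme as SNR grows without bound.
```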

    DynamoRep: Trajectory-Based Population Dynamics for Classification of Black-box Optimization Problems

    The application of machine learning (ML) models to the analysis of optimization algorithms requires the representation of optimization problems using numerical features. These features can be used as input for ML models that are trained to select or to configure a suitable algorithm for the problem at hand. Since in pure black-box optimization information about the problem instance can only be obtained through function evaluation, a common approach is to dedicate some function evaluations to feature extraction, e.g., using random sampling. This approach has two key downsides: (1) it reduces the budget left for the actual optimization phase, and (2) it neglects valuable information that could be obtained from problem-solver interaction. In this paper, we propose a feature extraction method that describes the trajectories of optimization algorithms using simple descriptive statistics. We evaluate the generated features for the task of classifying problem classes from the Black Box Optimization Benchmarking (BBOB) suite. We demonstrate that the proposed DynamoRep features capture enough information to identify the problem class on which the optimization algorithm is running, achieving a mean classification accuracy of 95% across all experiments.
    Comment: 9 pages, 5 figures
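    As a rough sketch of the idea, trajectory-based features can be built by summarizing each iteration's population with simple per-coordinate statistics. The exact statistics DynamoRep uses (and whether objective values are included) are not detailed here; the names and choices below are illustrative.

```python
import statistics

def trajectory_features(trajectory):
    """Describe an optimizer run by per-iteration, per-coordinate
    descriptive statistics (min, max, mean, median) of the population."""
    feats = []
    for population in trajectory:          # one population per iteration
        dim = len(population[0])
        for d in range(dim):
            coord = [candidate[d] for candidate in population]
            feats += [min(coord), max(coord),
                      statistics.mean(coord), statistics.median(coord)]
    return feats

# 3 iterations of a population of 5 two-dimensional candidates
# -> 3 iterations * 2 coordinates * 4 statistics = 24 features
trajectory = [[(0.1 * i + 0.01 * j, 0.2 * i - 0.01 * j) for j in range(5)]
              for i in range(3)]
features = trajectory_features(trajectory)
```

    The resulting fixed-length vector can feed any standard classifier, with no extra function evaluations spent on sampling.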

    Explainable Model-specific Algorithm Selection for Multi-Label Classification

    Multi-label classification (MLC) is an ML task of predictive modeling in which a data instance can simultaneously belong to multiple classes. MLC is gaining increasing interest in application domains such as text mining, computer vision, and bioinformatics. Several MLC algorithms have been proposed in the literature, resulting in a meta-optimization problem that the user needs to address: which MLC approach to select for a given dataset? To address this algorithm selection problem, we investigate the quality of an automated approach that uses characteristics of the datasets - so-called features - and a trained algorithm selector to choose which algorithm to apply for a given task. For our empirical evaluation, we use a portfolio of 38 datasets. We consider eight MLC algorithms, whose quality we evaluate using six different performance metrics. We show that our automated algorithm selector outperforms any of the single MLC algorithms, and this holds for all evaluated performance measures. Our selection approach is explainable, a characteristic that we exploit to investigate which meta-features have the largest influence on the decisions made by the algorithm selector. Finally, we also quantify the importance of the most significant meta-features for various domains.
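    A minimal sketch of feature-based algorithm selection, assuming a 1-nearest-neighbour selector over dataset meta-features; the paper's selector is a trained model, and the meta-feature names and values below are hypothetical.

```python
def select_algorithm(meta_features, training_data):
    """Return the algorithm that performed best on the most similar
    previously seen dataset (1-nearest-neighbour selection)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda rec: sq_dist(rec[0], meta_features))
    return nearest[1]

# Hypothetical meta-dataset: (dataset meta-features, best MLC algorithm on it).
# Meta-features here: (#instances, label density, #labels) -- illustrative only.
training_data = [
    ((120.0, 0.8, 14.0), "classifier-chains"),
    ((30.0, 0.2, 3.0), "binary-relevance"),
    ((500.0, 0.9, 40.0), "label-powerset"),
]
choice = select_algorithm((110.0, 0.7, 12.0), training_data)
```

    With a learned selector (e.g., a tree ensemble), the same interface applies, and feature-importance tools provide the explainability the abstract refers to.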

    OPTION: OPTImization Algorithm Benchmarking ONtology

    Many optimization algorithm benchmarking platforms allow users to share their experimental data to promote reproducible and reusable research. However, different platforms use different data models and formats, which drastically complicates the identification of relevant datasets, their interpretation, and their interoperability. Therefore, a semantically rich, ontology-based, machine-readable data model that can be used by different platforms is highly desirable. In this paper, we report on the development of such an ontology, which we call OPTION (OPTImization algorithm benchmarking ONtology). Our ontology provides the vocabulary needed for semantic annotation of the core entities involved in the benchmarking process, such as algorithms, problems, and evaluation measures. It also provides means for automatic data integration, improved interoperability, and powerful querying capabilities, thereby increasing the value of the benchmarking data. We demonstrate the utility of OPTION by annotating and querying a corpus of benchmark performance data from the BBOB collection of the COCO framework and from the Yet Another Black-Box Optimization Benchmark (YABBOB) family of the Nevergrad environment. In addition, we integrate features of the BBOB functional performance landscape into the OPTION knowledge base using publicly available datasets with exploratory landscape analysis. Finally, we integrate the OPTION knowledge base into the IOHprofiler environment and provide users with the ability to perform meta-analysis of performance data.
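    Ontology-based annotation boils down to describing each benchmark run as subject-predicate-object triples that any platform can query uniformly. The sketch below mimics a SPARQL-style basic graph pattern match over such triples; the predicate names are illustrative placeholders, not the actual OPTION vocabulary.

```python
def match(triples, s=None, p=None, o=None):
    """Return all triples matching the pattern (None acts as a wildcard),
    in the spirit of a SPARQL basic graph pattern."""
    return [t for t in triples
            if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]

# Hypothetical annotations of two benchmark runs.
triples = [
    ("run:001", "usesAlgorithm", "CMA-ES"),
    ("run:001", "solvesProblem", "bbob:f01"),
    ("run:001", "reportsMeasure", "ERT"),
    ("run:002", "usesAlgorithm", "DE"),
]
cma_runs = match(triples, p="usesAlgorithm", o="CMA-ES")
```

    Because every platform's data maps onto the same vocabulary, a single query like this retrieves comparable records regardless of which benchmarking environment produced them.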

    Algorithm Instance Footprint: Separating Easily Solvable and Challenging Problem Instances

    In black-box optimization, it is essential to understand why an algorithm instance works on some problem instances while failing on others, and to provide explanations of its behavior. We propose a methodology for formulating an algorithm instance footprint, which consists of a set of problem instances that are easy to solve and a set of problem instances that are difficult to solve for a given algorithm instance. This behavior of the algorithm instance is further linked to the landscape properties of the problem instances, explaining which properties make some problem instances easy or challenging. The proposed methodology uses meta-representations that embed the landscape properties of the problem instances and the performance of the algorithm into the same vector space. These meta-representations are obtained by training a supervised machine learning regression model for algorithm performance prediction and applying model explainability techniques to assess the importance of the landscape features to the performance predictions. Next, deterministic clustering of the meta-representations demonstrates that using them captures algorithm performance across the space and detects regions of poor and good algorithm performance, together with an explanation of which landscape properties lead to it.
    Comment: To appear at GECCO 202
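    The core step can be sketched as weighting each instance's landscape features by the importances a trained performance model assigns to them, then partitioning instances by predicted performance. The paper uses deterministic clustering; this sketch substitutes a simple threshold split, and all names and values are hypothetical.

```python
def meta_representation(landscape_features, importances):
    """Weight each landscape feature by its model-assigned importance, embedding
    instances in a space shaped by what drives the performance predictions."""
    return [f * w for f, w in zip(landscape_features, importances)]

def footprint(instances, importances, predicted_perf, threshold):
    """Split problem instances into 'easy' and 'hard' sets for one algorithm
    instance, keyed by its predicted performance on each of them."""
    easy, hard = [], []
    for name, feats in instances.items():
        rep = meta_representation(feats, importances)
        (easy if predicted_perf[name] <= threshold else hard).append((name, rep))
    return easy, hard

# Hypothetical data: two landscape features per instance, importances from a
# trained performance regressor, and its predicted precision per instance.
instances = {"f1": (0.9, 0.1), "f2": (0.2, 0.8)}
importances = (0.7, 0.3)
predicted = {"f1": 1e-8, "f2": 1e-2}
easy, hard = footprint(instances, importances, predicted, threshold=1e-6)
```

    Inspecting which weighted features dominate within each group then links easy and hard regions back to concrete landscape properties.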